Structs§
- Scans the provided input stream and outputs Tokens as it detects them.
- Returns each row as a Key => Value mapping, rather than a simple list of values.
- Incredibly basic CSV reader.
- Flexible CSV writer in which one can specify the dialect and optional column headers.
- A dialect represents the variations in how this record/field format can be encoded.
- A row represents a single line of a CSV table as a map.
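As a rough illustration of how the pieces above fit together, the following is a minimal, self-contained sketch — the names `Dialect`, `Row`, and `parse_line` are assumptions for illustration, not this crate's actual API, and it ignores quoting and escaping entirely:

```rust
use std::collections::HashMap;

// Hypothetical Dialect: bundles the separators that vary between CSV flavors.
struct Dialect {
    field_separator: char,
    record_separator: &'static str,
}

// A row as a header -> value mapping, as the Row struct above describes.
type Row = HashMap<String, String>;

// Zip the column headers against a naively split line (no quote handling).
fn parse_line(dialect: &Dialect, headers: &[&str], line: &str) -> Row {
    headers
        .iter()
        .zip(line.split(dialect.field_separator))
        .map(|(h, v)| (h.to_string(), v.to_string()))
        .collect()
}

fn main() {
    let rfc4180 = Dialect { field_separator: ',', record_separator: "\r\n" };
    let input = "alice,30\r\nbob,25";
    for line in input.split(rfc4180.record_separator) {
        let row = parse_line(&rfc4180, &["name", "age"], line);
        println!("{} is {}", row["name"], row["age"]); // alice is 30 / bob is 25
    }
}
```

A real reader would additionally handle quoted fields, embedded separators, and escaped quotes, which is where a token-level tokenizer earns its keep.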
Enums§
- Output from the Tokenizers as they detect individual tokens from the input stream.
Constants§
- Microsoft Excel tokenizer, effectively the same as RFC4180.
- Excel tab dialect, uses ‘\r\n’ for newlines and ‘\t’ for the field separator.
- Piped Field Dialect, uses vertical pipes ‘|’ for the field separator.
- RFC4180 Dialect, uses the industry defaults ‘\r\n’ for record separator, and ‘,’ for field separator.
- Standard unix dialect, uses ‘\n’ instead of CRLF for line separators.
- Tab dialect, uses ‘\n’ for newlines and ‘\t’ for the field separator.
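The dialect constants above differ only in which separators they plug in. A hedged sketch of that idea — the `Dialect` shape and constant names here are illustrative stand-ins, not the crate's definitions:

```rust
// Illustrative only: each dialect constant is the same struct with
// different separator choices.
struct Dialect {
    field: char,
    record: &'static str,
}

const RFC4180: Dialect = Dialect { field: ',', record: "\r\n" };
const UNIX: Dialect = Dialect { field: ',', record: "\n" };
const TAB: Dialect = Dialect { field: '\t', record: "\n" };
const PIPE_FIELD: Dialect = Dialect { field: '|', record: "\r\n" };

// Join fields with the field separator and terminate with the record separator.
fn encode_record(d: &Dialect, fields: &[&str]) -> String {
    let mut s = fields.join(&d.field.to_string());
    s.push_str(d.record);
    s
}

fn main() {
    assert_eq!(encode_record(&RFC4180, &["a", "b"]), "a,b\r\n");
    assert_eq!(encode_record(&PIPE_FIELD, &["a", "b"]), "a|b\r\n");
    assert_eq!(encode_record(&TAB, &["a", "b"]), "a\tb\n");
    assert_eq!(encode_record(&UNIX, &["x"]), "x\n");
}
```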
Traits§
- A Token Reader reads tokens from an input stream.
- A Token Writer writes the specified set of tokens to the output stream.
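To make the token-level layer concrete, here is a hypothetical sketch — the `Token` variants, the `TokenReader` trait, and `SimpleReader` are all assumed names for illustration, not the crate's actual definitions, and the toy reader below ignores quoting:

```rust
// Assumed token shape: a field's contents, or the end of a record.
#[derive(Debug, PartialEq)]
enum Token {
    Field(String),
    EndOfRecord,
}

// Assumed trait shape: pull tokens one at a time until the stream is exhausted.
trait TokenReader {
    fn next_token(&mut self) -> Option<Token>;
}

// A trivial reader over a comma-separated, newline-terminated string.
struct SimpleReader {
    tokens: std::vec::IntoIter<Token>,
}

impl SimpleReader {
    fn new(input: &str) -> Self {
        let mut tokens = Vec::new();
        for line in input.lines() {
            for field in line.split(',') {
                tokens.push(Token::Field(field.to_string()));
            }
            tokens.push(Token::EndOfRecord);
        }
        SimpleReader { tokens: tokens.into_iter() }
    }
}

impl TokenReader for SimpleReader {
    fn next_token(&mut self) -> Option<Token> {
        self.tokens.next()
    }
}

fn main() {
    let mut r = SimpleReader::new("a,b\n");
    assert_eq!(r.next_token(), Some(Token::Field("a".into())));
    assert_eq!(r.next_token(), Some(Token::Field("b".into())));
    assert_eq!(r.next_token(), Some(Token::EndOfRecord));
    assert_eq!(r.next_token(), None);
}
```

Splitting reading and writing into token-level traits like this lets the higher-level readers and writers share one tokenizer while differing only in how they assemble rows.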